
    An exact goodness-of-fit test based on the occupancy problems to study zero-inflation and zero-deflation in biological dosimetry data

    The goal in biological dosimetry is to estimate the dose of radiation that a suspected irradiated individual has received. For that, the analysis of aberrations (most commonly dicentric chromosome aberrations) in scored cells is performed and dose-response calibration curves are built. In whole-body irradiation (WBI) with X- and gamma-rays, the number of aberrations in samples is properly described by the Poisson distribution, although in partial-body irradiation (PBI) the excess of zeros contributed by the non-irradiated cells leads, for instance, to the Zero-Inflated Poisson distribution. Different methods are used to analyse the dosimetry data taking into account the distribution of the sample. In order to test the Poisson distribution against the Zero-Inflated Poisson distribution, several asymptotic and exact methods have been proposed which focus on the dispersion of the data. In this work, we suggest an exact test for the Poisson distribution, focused on the zero-inflation of the data, developed by Rao and Chakravarti (Some small sample tests of significance for a Poisson distribution. Biometrics 1956;12:264–82) and derived from occupancy problems. An approximation based on the standard Normal distribution is proposed for those cases where the computation of the exact test can be tedious. A Monte Carlo simulation study was performed in order to estimate empirical confidence levels and powers of the exact test and other tests proposed in the literature. Different examples of applications based on in vitro data and also data recorded in several radiation accidents are presented and discussed. A Shiny application which computes the exact test and other useful goodness-of-fit tests for the Poisson distribution is presented in order to make them available to all interested researchers.
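    As a rough illustration of the occupancy idea behind such a test (not the authors' exact computation or their Normal approximation): under the Poisson null, conditional on the total number of aberrations, the counts are spread over the scored cells like balls dropped uniformly into boxes, so the number of zero-count cells follows the classical occupancy distribution. The hypothetical Python sketch below approximates the resulting one-sided p-value by Monte Carlo; the function name and defaults are illustrative only.

```python
import numpy as np

def occupancy_zero_test(counts, n_sim=10_000, alternative="inflation", seed=0):
    """Monte Carlo approximation of an occupancy-based test on the number
    of zero-count cells under an equal-rate Poisson null.

    counts : per-cell aberration counts (array-like of ints)
    Returns an approximate one-sided p-value.
    """
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts)
    n_cells = counts.size            # number of scored cells (boxes)
    total = int(counts.sum())        # total aberrations (balls)
    observed_zeros = int(np.sum(counts == 0))

    # Under the Poisson null, conditional on the total, aberrations fall
    # into cells like balls dropped uniformly at random into boxes.
    zeros_sim = np.empty(n_sim, dtype=int)
    for i in range(n_sim):
        boxes = rng.integers(0, n_cells, size=total)    # cell hit by each aberration
        zeros_sim[i] = n_cells - np.unique(boxes).size  # cells left empty

    if alternative == "inflation":   # too many zeros (e.g. partial-body irradiation)
        return float(np.mean(zeros_sim >= observed_zeros))
    else:                            # zero-deflation: too few zeros
        return float(np.mean(zeros_sim <= observed_zeros))
```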

    Temporal dynamics of Middle East respiratory syndrome coronavirus in the Arabian Peninsula, 2012-2017

    Middle East respiratory syndrome coronavirus (MERS-CoV) remains a notable disease and poses a significant threat to global public health. The Arabian Peninsula is considered a major global epicentre for the disease, and the virus has crossed regional and continental boundaries since 2012. In this study, we focused on exploring the temporal dynamics of MERS-CoV in human populations in the Arabian Peninsula between 2012 and 2017, using publicly available data on case counts and combining two analytical methods. Disease progression was assessed by quantifying the time-dependent reproductive number (TD-R), while the temporal pattern of the case series was modelled using the AutoRegressive Integrated Moving Average (ARIMA) framework. We accounted for geographical variability between three major affected regions in Saudi Arabia: Eastern Province, Riyadh and Makkah. In Saudi Arabia, the epidemic size was large, with TD-R > 1 indicating significant spread until 2017. In both the Makkah and Riyadh regions, epidemic progression reached its peak in April 2014 (TD-R > 7), during the highest incidence period of MERS-CoV cases. In Eastern Province, one unique super-spreading event (TD-R > 10) was identified in May 2013, which comprised the most notable cases of human-to-human transmission. The best-fitting ARIMA model inferred statistically significant biannual seasonality in the Riyadh region, a region characterised by heavy seasonal camel-related activities. However, no statistical evidence of seasonality was identified in Eastern Province and Makkah; instead, both areas were marked by an endemic pattern of cases with sporadic outbreaks. Our study offers new insights into the epidemiology of the virus, including inferences about epidemic progression and evidence for seasonality. Despite the inherent limitations of the available data, our conclusions provide further guidance for implementing risk-based surveillance in high-risk populations and, subsequently, improving related intervention strategies against the epidemic at country and regional levels.
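    The abstract does not report the fitted model orders, so the following is only a minimal sketch of how a seasonal ARIMA fit of this kind could be set up with the statsmodels SARIMAX class. The monthly counts, the (1, 0, 1) orders and the 6-month seasonal period are illustrative assumptions, not the paper's model or data.

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical monthly MERS-CoV case counts for one region (illustrative only).
cases = pd.Series(
    [3, 5, 2, 40, 12, 6, 4, 3, 2, 5, 8, 4,
     6, 9, 3, 55, 20, 7, 5, 4, 3, 6, 10, 5],
    index=pd.date_range("2013-01", periods=24, freq="MS"),
)

# Seasonal ARIMA: a 6-month seasonal period probes the biannual pattern
# reported for Riyadh; the (p, d, q) orders here are arbitrary placeholders.
model = SARIMAX(cases, order=(1, 0, 1), seasonal_order=(1, 0, 0, 6))
fit = model.fit(disp=False)

print(fit.summary())    # inspect the seasonal AR coefficient and its p-value
print("AIC:", fit.aic)  # compare against a non-seasonal fit to judge seasonality
```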

    Evidence-based medicine among internal medicine residents in a community hospital program using smart phones

    BACKGROUND: This study implemented and evaluated point-of-care, wireless Internet access using smart phones for information retrieval during the daily clinical rounds and academic activities of internal medicine residents in a community hospital. We undertook the project to assess the feasibility of using smart phones as an alternative means of reaching online medical resources, because we were unable to find previous studies of this type. In addition, we wanted to learn which Web-based information resources internal medicine residents were using and whether providing bedside, real-time access to medical information would be perceived as useful for patient care and academic activities. METHODS: We equipped the medical teams in the hospital wards with smart phones (mobile phone/PDA hybrid devices) to provide immediate access to evidence-based resources developed at the National Library of Medicine as well as to other medical Websites. The emphasis of this project was to measure the convenience and feasibility of real-time access to the current medical literature using smart phones. RESULTS: The smart phones provided real-time mobile access to the medical literature during daily rounds and clinical activities in the hospital. Physicians found these devices easy to use. A post-study survey showed that the information retrieved was perceived to be useful for patient care and academic activities. CONCLUSION: In community hospitals and ambulatory clinics without wireless networks, where the majority of physicians work, real-time access to the current medical literature may be achieved through smart phones. Immediate availability of reliable and up-to-date information from authoritative sources on the Web makes evidence-based practice in a community hospital a reality.

    A UMLS-based spell checker for natural language processing in vaccine safety

    BACKGROUND: The Institute of Medicine has identified patient safety as a key goal for health care in the United States. Detecting vaccine adverse events is an important public health activity that contributes to patient safety. Reports about adverse events following immunization (AEFI) from surveillance systems contain free-text components that can be analyzed using natural language processing. To extract Unified Medical Language System (UMLS) concepts from free text and classify AEFI reports based on the concepts they contain, we first needed to clean the text by expanding abbreviations and shortcuts and correcting spelling errors. Our objective in this paper was to create a UMLS-based spelling error correction tool as a first step in the natural language processing (NLP) pipeline for AEFI reports. METHODS: We developed spell-checking algorithms using open source tools. We used de-identified AEFI surveillance reports to create free-text data sets for analysis. After expansion of abbreviated clinical terms and shortcuts, we performed spelling correction in four steps: (1) error detection, (2) word list generation, (3) word list disambiguation and (4) error correction. We then measured the performance of the resulting spell checker by comparing it to manual correction. RESULTS: We used 12,056 words to train the spell checker and tested its performance on 8,131 words. During testing, sensitivity, specificity, and positive predictive value (PPV) for the spell checker were 74% (95% CI: 74%–75%), 100% (95% CI: 100%–100%), and 47% (95% CI: 46%–48%), respectively. CONCLUSION: We created a prototype spell checker that can be used to process AEFI reports. We used the UMLS Specialist Lexicon as the primary, domain-specific source of dictionary terms against which potentially misspelled words in the corpus were compared, with the WordNet lexicon as a secondary source. The prototype's sensitivity was comparable to that of currently available tools, but its specificity was much superior. The slow processing speed may be improved by trimming the tool down to its most useful component algorithms. Other investigators may find the methods we developed useful for cleaning text using lexicons specific to their area of interest.
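    The four-step pipeline (error detection, word-list generation, disambiguation, correction) can be illustrated with a toy, standard-library sketch. The tiny word set below stands in for the UMLS Specialist Lexicon and WordNet lists actually used, and the similarity-based disambiguation is an assumption for illustration rather than the authors' algorithm.

```python
import difflib

# Toy stand-in for the UMLS Specialist Lexicon / WordNet word lists used in the paper.
DICTIONARY = {"fever", "rash", "injection", "site", "swelling", "vaccine", "seizure"}

def correct_report(text, dictionary=DICTIONARY, cutoff=0.8):
    """Minimal spell-correction pipeline mirroring the paper's four steps:
    (1) error detection, (2) candidate word-list generation,
    (3) disambiguation (here: best similarity score), (4) correction."""
    corrected = []
    for word in text.lower().split():
        if word in dictionary:                    # (1) known word, not an error
            corrected.append(word)
            continue
        # (2) generate candidate corrections from the dictionary by string similarity
        candidates = difflib.get_close_matches(word, dictionary, n=3, cutoff=cutoff)
        # (3) disambiguate: keep the closest match; (4) correct, or leave the word as-is
        corrected.append(candidates[0] if candidates else word)
    return " ".join(corrected)

print(correct_report("feever and sweling at the injction site"))
# -> "fever and swelling at the injection site"
#    (words with no dictionary match above the cutoff are left unchanged)
```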

    Linking genes to literature: text mining, information extraction, and retrieval applications for biology

    Efficient access to the information contained in online scientific literature collections is essential for life science research, playing a crucial role from the initial stage of experiment planning to the final interpretation and communication of results. The biological literature also constitutes the main information source for the manual literature curation used by expert-curated databases. Following the increasing popularity of web-based applications for analyzing biological data, new text-mining and information extraction strategies are being implemented. These systems exploit existing regularities in natural language to extract biologically relevant information from electronic texts automatically. The aim of the BioCreative challenge is to promote the development of such tools and to provide insight into their performance. This review presents a general introduction to the main characteristics and applications of currently available text-mining systems for the life sciences in terms of the following: the type of biological information need being addressed; the level of information granularity of both user queries and results; and the features and methods commonly exploited by these applications. The current trend in biomedical text mining points toward increasing diversification in terms of application types and techniques, together with the integration of domain-specific resources such as ontologies. Additional descriptions of some of the systems discussed here are available on the internet.

    Can smoking be child abuse?
